Tech Tip: How to Keep Your Chatbot Activity More Private
2024-08-22
Operators of artificial intelligence (AI) chatbot tools have made it clear that users' requests can be saved and used to further develop the AI systems.
But what if a user does not want that? Some tools permit users to request that personal information used in chatbot requests not be saved or used to develop or train AI systems.
Technology experts say it could be too late for users who have already provided information to these tools to have the data removed.
But the Associated Press is offering the following advice for users who want to increase their privacy protections.
Google Gemini
Google saves chatbot interactions, known as conversations, with its Gemini tool.
The company says it uses the data to train its machine learning systems.
But the company does give users a way to limit the information captured and to remove past conversations.
For users 18 or older, requests are kept for 18 months, although this can be changed in user settings.
Google sometimes has human workers examine some user conversations as part of efforts to improve Gemini's systems.
In general, Google warns Gemini users not to enter any sensitive information they do not want human workers to see.
Gemini users can change, or "opt out" of, these default settings.
From the main Gemini website page, users should find and click on the "Activity" button toward the bottom left of the page.
From there, they can click the "Turn off" button next to the heading "Gemini Apps Activity."
Users then have the chance to block future conversations from being saved.
They can also choose to have all previous conversations removed.
Whether a user chooses to turn their activity off or leave it on, Google notes that all conversations with Gemini are saved for 72 hours to "provide the service and process any feedback."
Meta AI
Meta has an AI chatbot used across its social media services Facebook, WhatsApp and Instagram.
The company says its AI models are trained on information shared by users, including social media posts and photos.
Meta says it does not train its AI systems on private messages sent by users to friends or family.
Not everyone can opt out of this policy.
People in the 27-nation European Union and Britain, both of which have strong privacy rules, can.
This process can be completed from Meta's main Privacy Center.
Click "Other Policies and Articles" from the list near the bottom on the left side, then click the section related to AI.
Users can then find a link to a form to opt out.
People in the United States and other countries without national data privacy laws do not have this ability.
Meta's Privacy Center does link to a form where users can request that their data captured by third parties not be used to "develop and improve AI at Meta."
But the company says these requests are examined before being acted upon and might be rejected based on local laws.
Microsoft Copilot
With Microsoft's Copilot chatbot, personal users cannot opt out of having their data used to develop the company's AI models.
The best a user can do is to remove conversations with the chatbot by going to their Microsoft account's settings and privacy page.
Find the drop-down choice called "Copilot interaction history" or "Copilot activity history" and click the button to remove the history.
OpenAI's ChatGPT
Users of OpenAI's ChatGPT service can make privacy changes from the tool's settings page.
Find the "data controls" setting and turn off the choice called "Improve the model for everyone."
If a user does not have an account, they can click on the small question mark at the bottom right of the page.
Then click "Settings" to see the same choice to opt out of AI training.
OpenAI explains on its data controls help page that when users opt out, their conversations will still appear in the history but will not be used for training.
The company says these temporary conversations will be kept for 30 days.
Anthropic's Claude AI
Anthropic is an AI research company based in San Francisco.
The company says its Claude AI tool is not trained on personal data.
However, users can request that specific conversations be used in training or not.
Users can do this by giving the conversation a "thumbs up" or "thumbs down" or by emailing the company.
I'm Bryan Lynn.
Bryan Lynn wrote this story for VOA Learning English, based on reports from The Associated Press, Google, Meta and other online sources.
___________________________________________
Words in This Story
chatbot - n. a computer program designed to interact with humans
default - n. what exists or usually happens if no changes are made
button - n. an image, or icon, that appears on a computer screen which the user can click to cause software to perform some kind of action
feedback - n. information or statements of opinion about something